OpenAI is finally letting people know when they see an AI-generated image, complete with details about the origin of the content.

OpenAI Introduces Watermarking Technology for AI Images Produced by DALL-E 3

OpenAI understands the need to inform people when they see an AI-generated image, especially images generated by its own DALL-E 3 platform, and to that end, the company is adding watermarks. OpenAI has confirmed that it uses an open standard established by the Coalition for Content Provenance and Authenticity (C2PA). This means that any AI image created with DALL-E 3 will carry embedded metadata, such as the name of the AI tool used to create the image.
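C2PA provenance data is embedded inside the image file itself, in a JUMBF metadata box, rather than overlaid as a visible mark. Proper verification requires a real C2PA parser such as the coalition's `c2patool`; purely as an illustration, though, a script can heuristically check whether a file appears to carry such metadata by scanning its raw bytes for the JUMBF and C2PA labels. The helper below (`has_c2pa_manifest` is a made-up name, not part of any OpenAI or C2PA API) is a rough sketch of that idea, not a substitute for real manifest validation.

```python
from pathlib import Path


def has_c2pa_manifest(path: str) -> bool:
    """Heuristically check whether an image file appears to carry C2PA
    provenance metadata, by scanning the raw bytes for the JUMBF box
    type ('jumb') and the C2PA manifest label ('c2pa').

    This is an illustrative shortcut only: a genuine check must parse
    and cryptographically verify the manifest with a C2PA-aware tool.
    """
    data = Path(path).read_bytes()
    return b"jumb" in data and b"c2pa" in data
```

Because the check is a plain substring scan, it also makes the article's later point concrete: if a platform re-encodes the image and drops the metadata bytes, the provenance record disappears along with them.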

It is an interesting coincidence that OpenAI is introducing its watermark at a time when Meta has been talking about labeling content produced by artificial intelligence and making its origin clear to viewers.

And since OpenAI also integrates DALL-E 3 with ChatGPT for creating AI images, this new metadata will be coming to the AI chatbot by February 12th. According to OpenAI, embedding origin information may slightly increase the file size, but it has no effect on the visual quality of the image, which may matter more to people.

It’s great to see OpenAI bring these changes to AI-generated images, but the company has also highlighted a major loophole: the feature can be easily circumvented.

OpenAI has pointed out that social media platforms tend to strip metadata when images are uploaded. Taking a screenshot of an image discards the metadata entirely, making it fairly easy to erase the record of an image's origin, even when it was created by AI.

Situations like these make it clear that strong regulation of AI content is critical, and all tech companies need to work together to make it safer to create and consume AI-generated images, videos and more.
